Exploiting Rich Event Representation to Improve Event Causality Recognition
Event causality identification is an essential information extraction task that has attracted growing attention. Early researchers were accustomed to combining convolutional or recurrent neural network models with external causal knowledge, but these methods ignore the importance of a rich semantic representation of the event. Because an event is highly structured, it admits a richer semantic representation. We argue that the elements of an event, the interaction between two events, and the context between two events can all enrich an event's semantic representation and help identify event causality. The effective semantic representation of events in event causality recognition therefore deserves further study. To verify the effectiveness of rich event semantic representation for event causality identification, we propose a model that exploits rich event representation to improve event causality recognition. Our model is based on multi-column convolutional neural networks that integrate rich event representations, including event tensor representation, event interaction representation, and context-aware event representation. We designed various experimental models and conducted experiments on the Chinese Emergency Corpus, whose comprehensive annotation of events and event elements enabled us to study the semantic representation of events from all aspects. Extensive experiments showed that the rich semantic representation of events achieved a significant performance improvement over the baseline model on event causality recognition, indicating that the semantic representation of events plays an important role in this task.
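The multi-column design described in the abstract can be sketched as follows. This is a minimal NumPy illustration under my own assumptions, not the authors' implementation: the dimensions, the three randomly generated matrices standing in for the event tensor, event interaction, and context-aware representations, and the two-class output layer are all placeholders. Each representation feeds its own convolutional column, and the pooled column outputs are concatenated before classification.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_column(x, w):
    # One "column": valid 1-D convolution over x (seq_len, d_in) with
    # filters w (k, d_in, d_out), then ReLU and max-pooling over time.
    k = w.shape[0]
    out = np.stack([np.tensordot(x[i:i + k], w, axes=([0, 1], [0, 1]))
                    for i in range(x.shape[0] - k + 1)])
    return np.maximum(out, 0).max(axis=0)          # (d_out,)

d, d_out = 16, 8
# Hypothetical inputs: the three representations named in the abstract.
event_tensor = rng.normal(size=(5, d))             # event-element representation
interaction  = rng.normal(size=(5, d))             # event-interaction representation
context      = rng.normal(size=(9, d))             # context-aware representation

# One convolutional column per representation, then concatenate.
cols = [conv_column(r, rng.normal(size=(3, d, d_out)) * 0.1)
        for r in (event_tensor, interaction, context)]
features = np.concatenate(cols)                    # (3 * d_out,)

# Softmax over {causal, non-causal} (untrained toy weights).
W = rng.normal(size=(features.size, 2)) * 0.1
logits = features @ W
probs = np.exp(logits - logits.max())
probs /= probs.sum()
```

Keeping one column per representation lets each convolutional filter bank specialize before the evidence is fused, which is the point of the multi-column layout.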
Hybrid Neural Network for Automatic Recovery of Elliptical Chinese Quantity Noun Phrases
In Mandarin Chinese, when the noun head appears in the context, a quantity noun phrase can be reduced to a quantity phrase with the noun head omitted. This structure is called an elliptical quantity noun phrase. The automatic recovery of elliptical quantity noun phrases is crucial for syntactic parsing, semantic representation, and other downstream tasks. In this paper, we propose a hybrid neural network model that identifies the semantic category of elliptical quantity noun phrases and recovers the omitted semantics by supplementing concept categories. First, we use BERT to generate character-level vectors. Second, a Bi-LSTM is applied to capture the context information of each character and compress the input into the context memory history. Then a CNN is utilized to capture the local semantics of n-grams with various granularities. Based on the Chinese Abstract Meaning Representation (CAMR) corpus and the Xinhua News Agency corpus, we construct a hand-labeled elliptical quantity noun phrase dataset and carry out semantic recovery of elliptical quantity noun phrases on this dataset. The experimental results show that our hybrid neural network model can effectively improve the performance of semantic complementation for elliptical quantity noun phrases.
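A minimal NumPy sketch of the BERT → Bi-LSTM → CNN pipeline described above. All dimensions are toy assumptions, random vectors stand in for BERT's character embeddings, and the weights are untrained; this is a shape-level illustration of the hybrid architecture, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

def lstm(xs, d_h):
    # Single-layer LSTM over a sequence xs (seq_len, d_in); returns all
    # hidden states. Gates: input, forget, output, candidate.
    d = xs.shape[1]
    W = rng.normal(size=(4, d + d_h, d_h)) * 0.1
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    h, c, hs = np.zeros(d_h), np.zeros(d_h), []
    for x in xs:
        z = np.concatenate([x, h])
        i, f, o = (sig(z @ W[k]) for k in range(3))
        g = np.tanh(z @ W[3])
        c = f * c + i * g
        h = o * np.tanh(c)
        hs.append(h)
    return np.stack(hs)

# Stand-in for BERT character-level vectors (hypothetical: 7 chars, 12 dims).
chars = rng.normal(size=(7, 12))

# Bi-LSTM: forward and backward passes, concatenated per character.
fwd = lstm(chars, 8)
bwd = lstm(chars[::-1], 8)[::-1]
ctx = np.concatenate([fwd, bwd], axis=1)          # (7, 16) context vectors

# CNN over n-grams of the Bi-LSTM outputs (window = 3), ReLU + max-pool.
k, d_out = 3, 10
Wc = rng.normal(size=(k * ctx.shape[1], d_out)) * 0.1
grams = np.stack([ctx[i:i + k].ravel() for i in range(len(ctx) - k + 1)])
feats = np.maximum(grams @ Wc, 0).max(axis=0)     # (10,) phrase features
# A linear layer over `feats` would predict the omitted noun head's
# concept category.
```

The division of labor matches the abstract: the Bi-LSTM supplies per-character context, and the CNN extracts local n-gram semantics from those contextualized vectors.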
Determination of unit watershed size for use in small watershed hydrological modeling
Since uniform rainfall over the watershed is the most fundamental assumption in small watershed modeling, the limitation on watershed size should be investigated. This study defines the unit watershed size as a dimensional criterion associated with the storm size and with the extent and frequency of storm exclusion (called spatial and temporal errors). Two approaches for determining the average storm cell radius are proposed. One relates to the spatial variation in storm rainfall (DSIP), while the other considers both spatial variation and storm exclusion events (RVIP). Both analytical and empirical solutions are obtained, and the effect of multiple-storm events is discussed. The storm radius for Walnut Gulch is determined to be 4.6 miles, which is close to others' results. Given the storm radius, a relationship between unit watershed size and the spatial and temporal errors is developed analytically. Based on this relationship, both selection and evaluation of unit watershed size are made possible: if the error levels are known, the proper watershed size can be selected, and if the watershed size is given, the error levels can be evaluated. By using the unit watershed size, models of small watersheds may be extended to those of large watersheds.
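As a toy geometric illustration of why the uniform-rainfall assumption weakens with watershed size: the Monte Carlo below is my own simplification, not the thesis's DSIP/RVIP analysis. A square watershed, a circular storm cell with its center uniform over the watershed, and the 4.6-mile radius quoted in the abstract are the only inputs; the estimated quantity is simply the expected fraction of the watershed wetted by one cell.

```python
import math
import random

rng = random.Random(0)
R = 4.6  # storm-cell radius from the abstract (miles)

def coverage_fraction(side, trials=20000):
    # Expected fraction of a side x side square watershed covered by one
    # circular storm cell, estimated by Monte Carlo over both the cell
    # center and a sample point, each uniform over the square.
    covered = 0
    for _ in range(trials):
        cx, cy = rng.uniform(0, side), rng.uniform(0, side)
        px, py = rng.uniform(0, side), rng.uniform(0, side)
        if math.hypot(px - cx, py - cy) <= R:
            covered += 1
    return covered / trials

small = coverage_fraction(2.0)   # watershed much smaller than the cell
large = coverage_fraction(20.0)  # watershed much larger than the cell
# Coverage, and with it the uniform-rainfall assumption, degrades as the
# watershed grows relative to the storm cell.
```

In this toy setting a 2-mile watershed is always fully wetted (its diagonal is shorter than R), while a 20-mile watershed is mostly dry for any single cell, which is the qualitative trade-off the unit watershed size criterion formalizes.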
Checkpoint-Based Forward Recovery Using Lookahead Execution and Rollback Validation in Parallel and Distributed Systems
Office of Naval Research & NASA, N00014-91-J-1283; NASA NAG-1-61
Checkpoint-based forward recovery using lookahead execution and rollback validation in parallel and distributed systems
This thesis studies a forward recovery strategy using checkpointing and optimistic execution in parallel and distributed systems. The approach uses replicated tasks executing on different processors for forward recovery and checkpoint comparison for error detection. To reduce overall redundancy, the approach employs lower static redundancy in the common error-free situation to detect errors than the standard N-Module Redundancy (NMR) scheme uses to mask them; for the rare occurrence of an error, it applies extra redundancy for recovery. To reduce the run-time recovery overhead, lookahead processes advance the computation speculatively, and a rollback process produces a diagnosis identifying the correct lookahead processes without rolling back the whole system. Both analytical and experimental evaluation have shown that this strategy can provide nearly error-free execution time, even under faults, with a lower average redundancy than NMR.

Using checkpoint comparison for error detection calls for static checkpoint placement in user programs, whereas checkpoint insertions based on the system clock produce dynamic checkpoints. A compiler-enhanced polling mechanism using instruction-based time measures is utilized to insert static checkpoints into user programs automatically. The technique has been implemented in a GNU CC compiler for Sun workstations. Experiments demonstrate that the approach provides stable checkpoint intervals and reproducible checkpoint placements, with performance overhead comparable to a previous compiler-assisted dynamic scheme (CATCH).

Obtaining a consistent recovery line is another issue to consider in this forward recovery strategy. Checkpointing concurrent processes independently may lead to an inconsistent recovery line that causes rollback propagation. This thesis also describes an evolutionary approach to establishing a consistent recovery line with low overhead.
This approach starts a checkpointing session by checkpointing each process locally and independently. During the session, those local checkpoints may be updated, and this updating drives the recovery line to evolve into a consistent line. Unlike the globally synchronized approach, the evolutionary approach requires no synchronization protocols to reach a consistent state for checkpointing. Unlike the communication-synchronized approach, it avoids excessive checkpointing by providing controllable checkpoint placement. Unlike loosely synchronized schemes, it requires neither message retry nor message replay during recovery.
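The detect-then-lookahead control flow described in the abstract can be sketched in a few lines of Python. This is a toy simulation under my own assumptions (a deterministic per-step task, a single injected transient fault, two replicas plus one validating re-execution), not the thesis's implementation; it only illustrates the sequence: compare checkpoints to detect an error, let lookahead processes advance speculatively from both candidate states, and use one rollback re-execution to diagnose which lookahead is correct.

```python
import random

def task(step):
    # Deterministic per-step work (toy stand-in for the replicated task).
    return step * step

def execute(start_state, start, end, corrupt_at=None):
    # Run steps [start, end); optionally inject one transient fault.
    state = start_state
    for step in range(start, end):
        state += task(step)
        if step == corrupt_at:
            state += 99          # corrupted checkpoint value
    return state

rng = random.Random(42)
CKPT = 10                        # checkpoint interval (steps)

# Duplex execution: two replicas, checkpoint comparison *detects* errors
# (lower static redundancy than TMR, which needs three replicas to *mask* them).
a = execute(0, 0, CKPT, corrupt_at=rng.randrange(CKPT))  # faulty replica
b = execute(0, 0, CKPT)                                  # fault-free replica

good = a
if a != b:
    # Lookahead: both candidate states advance speculatively past the
    # checkpoint instead of stalling the whole system.
    look_a = execute(a, CKPT, 2 * CKPT)
    look_b = execute(b, CKPT, 2 * CKPT)
    # Rollback validation: one process re-executes the disputed interval
    # to diagnose which lookahead started from the correct state.
    reference = execute(0, 0, CKPT)
    good = look_a if reference == a else look_b
```

The extra lookahead and validation redundancy is paid only when the checkpoints disagree, which mirrors the abstract's claim of a lower average redundancy than NMR in the common error-free case.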